Zero-sum game

In game theory and economic theory, a zero-sum game is a mathematical representation of a situation in which one participant's gain (or loss) of utility is exactly balanced by the losses (or gains) of the other participant(s). If the total gains of the participants are added up and the total losses are subtracted, they sum to zero. Cutting a cake is thus a zero-sum game if all participants value each unit of cake equally (see marginal utility), since taking a larger piece reduces the amount of cake available for others. In contrast, non-zero-sum describes a situation in which the interacting parties' aggregate gains and losses are either less than or more than zero. A zero-sum game is also called a strictly competitive game. Zero-sum games are most often solved with the minimax theorem, which is closely related to linear programming duality,[1] or with Nash equilibrium.

Definition

The zero-sum property (if one gains, another loses) means that any result of a zero-sum situation is Pareto optimal (generally, any game where all strategies are Pareto optimal is called a conflict game).[2]

Situations where participants can all gain or suffer together are referred to as non-zero-sum. Thus, a country with an excess of bananas trading with another country for its excess of apples, where both benefit from the transaction, is in a non-zero-sum situation. Other non-zero-sum games are games in which the sum of gains and losses by the players is sometimes more or less than what they began with.

Solution

For two-player finite zero-sum games, the different game-theoretic solution concepts of Nash equilibrium, minimax, and maximin all give the same solution. In the solution, the players may need to play mixed strategies.

Example

A zero-sum game (each cell lists Red's payoff, then Blue's payoff)

           Blue A     Blue B     Blue C
  Red 1    30, -30    -10, 10    20, -20
  Red 2    10, -10    20, -20   -20, 20

A game's payoff matrix provides a convenient representation. Consider, for example, the two-player zero-sum game shown above.

The order of play proceeds as follows: The first player (red) chooses in secret one of the two actions 1 or 2; the second player (blue), unaware of the first player's choice, chooses in secret one of the three actions A, B or C. Then, the choices are revealed and each player's points total is affected according to the payoff for those choices.

Example: Red chooses action 2 and Blue chooses action B. When the payoff is allocated, Red gains 20 points and Blue loses 20 points.

Now, in this example game both players know the payoff matrix and attempt to maximize the number of their points. What should they do?

Red could reason as follows: "With action 2, I could lose up to 20 points and can win only 20, while with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, Blue would choose action C. If both players take these actions, Red will win 20 points. But what happens if Blue anticipates Red's reasoning and choice of action 1, and goes for action B, so as to win 10 points? Or if Red in turn anticipates this devious trick and goes for action 2, so as to win 20 points after all?

Émile Borel and John von Neumann had the fundamental and surprising insight that probability provides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes the probabilities so as to minimize the maximum expected point-loss independent of the opponent's strategy. This leads to a linear programming problem whose solution gives the optimal strategies for each player. This minimax method can compute provably optimal strategies for all two-player zero-sum games.

For the example given above, it turns out that Red should choose action 1 with probability 4/7 and action 2 with probability 3/7, while Blue should assign the probabilities 0, 4/7, and 3/7 to the three actions A, B, and C. Red will then win 20/7 points on average per game.
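These probabilities can be checked with the standard equalization argument (a sketch added here for illustration, not taken from the cited sources). If Red plays action 1 with probability p, Red's expected payoff against each of Blue's pure actions is

    A: 30p + 10(1 - p) = 10 + 20p
    B: -10p + 20(1 - p) = 20 - 30p
    C: 20p - 20(1 - p) = 40p - 20

Red's worst case is largest where the B and C lines cross, 20 - 30p = 40p - 20, giving p = 4/7 and a guaranteed expectation of 20/7; action A pays Red even more (150/7) at that point, so it does not constrain the solution. The symmetric calculation for Blue, who mixes only B and C with probabilities q and 1 - q, equalizes 20 - 30q = 40q - 20 and again gives q = 4/7, so Blue can hold Red down to exactly 20/7 on average.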

Solving

The Nash equilibrium for a two-player, zero-sum game can be found by solving a linear programming problem. Suppose a zero-sum game has a payoff matrix M where element M_{i,j} is the payoff obtained when the minimizing player chooses pure strategy i and the maximizing player chooses pure strategy j (i.e. the player trying to minimize the payoff chooses the row and the player trying to maximize the payoff chooses the column). Assume every element of M is positive. The game will have at least one Nash equilibrium. The Nash equilibrium can be found (see ref. [2], page 740) by solving the following linear program to find a vector u:

Minimize:
    \sum_i u_i
Subject to the constraints:
    u \ge 0
    M u \ge \mathbf{1}.

The first constraint says each element of the u vector must be nonnegative, and the second constraint says each element of the  M u vector must be at least 1. For the resulting u vector, the inverse of the sum of its elements is the value of the game. Multiplying u by that value gives a probability vector, giving the probability that the maximizing player will choose each of the possible pure strategies.
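As a concrete illustration, this program can be handed to an off-the-shelf LP solver. The following is a minimal sketch, assuming NumPy and SciPy are available; the function name solve_zero_sum and the other identifiers are chosen here for illustration and are not from the article.

import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(M):
    # M[i, j] is the payoff when the minimizing player picks row i and the
    # maximizing player picks column j. If any entry is non-positive, add a
    # constant first (the shift trick described in the next paragraph) and
    # subtract it from the value at the end.
    M = np.asarray(M, dtype=float)
    shift = 0.0
    if M.min() <= 0:
        shift = 1.0 - M.min()
        M = M + shift
    # linprog minimizes c @ u subject to A_ub @ u <= b_ub and the bounds,
    # so "M u >= 1" is rewritten as "(-M) u <= -1".
    res = linprog(c=np.ones(M.shape[1]),
                  A_ub=-M, b_ub=-np.ones(M.shape[0]),
                  bounds=[(0, None)] * M.shape[1])
    value = 1.0 / res.x.sum()      # value of the (shifted) game
    strategy = res.x * value       # maximizing player's mixed strategy
    return value - shift, strategy

# The example game, rewritten so that Blue's actions A, B, C index the rows
# (Blue minimizes Red's payoff) and Red's actions 1, 2 index the columns.
M = [[30, 10],
     [-10, 20],
     [20, -20]]
value, red_strategy = solve_zero_sum(M)
print(value)         # 20/7, approximately 2.857
print(red_strategy)  # approximately [4/7, 3/7] = [0.571, 0.429]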

If the game matrix does not have all positive elements, simply add a constant to every element that is large enough to make them all positive. That will increase the value of the game by that constant, and will have no effect on the equilibrium mixed strategies.

The equilibrium mixed strategy for the minimizing player can be found by solving the dual of the given linear program. Alternatively, it can be found by applying the above procedure to a modified payoff matrix, namely the transpose and negation of M (with a constant added to make all its elements positive), and solving the resulting game.
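Continuing the sketch above (same illustrative solve_zero_sum function, which handles the positivity shift internally), the transpose-and-negate trick recovers Blue's equilibrium strategy for the example game:

blue_value, blue_strategy = solve_zero_sum(-np.asarray(M).T)
print(blue_value)     # -20/7, the game's value seen from Blue's side
print(blue_strategy)  # approximately [0, 4/7, 3/7] over actions A, B, C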

If all the solutions to the linear program are found, they will constitute all the Nash equilibria for the game. Conversely, any linear program can be converted into a two-player, zero-sum game by using a change of variables that puts it in the form of the above equations. So such games are equivalent to linear programs, in general.

Non-zero-sum

Economics

Many economic situations are not zero-sum, since valuable goods and services can be created, destroyed, or badly allocated, and any of these will create a net gain or loss. Assuming the counterparties are acting rationally with symmetric information, any commercial exchange is a non-zero-sum activity, because each party must consider the goods it is receiving as being at least fractionally more valuable than the goods it is delivering. An exchange must also benefit each party by enough to cover its transaction costs.

Psychology

The most common and simplest example from social psychology is the concept of "social traps": in some cases we can enhance our collective well-being by pursuing our personal interests, while in other cases parties pursuing their own ends engage in mutually destructive behavior.

Complexity

In his book Nonzero: The Logic of Human Destiny, Robert Wright theorized that society becomes increasingly non-zero-sum as it becomes more complex, specialized, and interdependent. As former US President Bill Clinton states:

The more complex societies get and the more complex the networks of interdependence within and beyond community and national borders get, the more people are forced in their own interests to find non-zero-sum solutions. That is, win–win solutions instead of win–lose solutions.... Because we find as our interdependence increases that, on the whole, we do better when other people do better as well — so we have to find ways that we can all win, we have to accommodate each other....
—Bill Clinton, Wired interview, December 2000.[3]

Extensions

In 1944, John von Neumann and Oskar Morgenstern proved that any zero-sum game involving n players is in fact a generalized form of a zero-sum game for two players, and that any non-zero-sum game for n players can be reduced to a zero-sum game for n + 1 players, with the (n + 1)th player representing the global profit or loss.[4]

Misunderstandings

Zero-sum games and particularly their solutions are commonly misunderstood by critics of game theory, usually with respect to the independence and rationality of the players, as well as to the interpretation of utility functions. Furthermore, the word "game" does not imply the model is valid only for recreational games.[1]

References

  1. Ken Binmore (2007). Playing for Real: A Text on Game Theory. Oxford University Press US. ISBN 9780195300574. http://books.google.com/books?id=eY0YhSk9ujsC. Chapters 1 & 7.
  2. Bowles, Samuel (2004). Microeconomics: Behavior, Institutions, and Evolution. Princeton University Press. pp. 33–36. ISBN 0-691-09163-3.
  3. "Wired 8.12: Bill Clinton". Wired.com. 2009-01-04. http://www.wired.com/wired/archive/8.12/clinton.html. Retrieved 2010-06-17.
  4. Theory of Games and Economic Behavior. Princeton University Press (1953). Digital edition 2005-06-25. http://www.archive.org/stream/theoryofgamesand030098mbp#page/n70/mode/1up/search/reduce. Retrieved 2010-11-11.
